4,523 research outputs found

    Wideband Planar Plate Monopole Antenna

    Get PDF

    Nanolithography Study Using Scanning Probe Microscope

    Get PDF

    Extremal Black Attractors in 8D Maximal Supergravity

    Full text link
    Motivated by the new higher-dimensional supergravity solutions on intersecting attractors obtained by Ferrara et al. in [Phys. Rev. D 79, 065031 (2009)], we focus in this paper on 8D maximal supergravity with moduli space $[SL(3,\mathbb{R})/SO(3)]\times[SL(2,\mathbb{R})/SO(2)]$ and study explicitly the attractor mechanism for various configurations of extremal black p-branes (anti-branes) with the typical near-horizon geometries $AdS_{p+2}\times S^{m}\times T^{6-p-m}$, with $p=0,1,2,3,4$ and $2\le m\le 6$. Interpretations in terms of wrapped M2 and M5 branes of 11D M-theory on the 3-torus are also given. Keywords: 8D supergravity, black p-branes, attractor mechanism, M-theory. Comment: 37 pages
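
    For orientation (this restatement and the dimension count are added here, not part of the original abstract), the quoted moduli space and near-horizon geometries can be written as

    \mathcal{M}_{8D} \;=\; \frac{SL(3,\mathbb{R})}{SO(3)} \times \frac{SL(2,\mathbb{R})}{SO(2)},
    \qquad
    AdS_{p+2} \times S^{m} \times T^{6-p-m},
    \qquad
    (p+2) + m + (6-p-m) \;=\; 8,

    so the three factors of each near-horizon geometry always fill out the eight spacetime dimensions, whichever p and m are chosen in the quoted ranges.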

    Base Deficit in Immediate Postoperative Period of Open Heart Surgery and Outcome of Patients

    Get PDF
    Abstract- Base deficit is a non-respiratory indicator of acid-base status. The aim of this study is to assess the relationship between the base deficit value in the immediate postoperative period of CABG and valvular heart surgery and the cardiopulmonary and in-hospital outcome of patients. A total of 136 consecutive patients scheduled for CABG and valvular heart surgery were included in the study. 20 variables were determined during the pre-, intra-, and postoperative periods. A univariate statistical analysis was performed differentiating patients whose initial base deficit after weaning from cardiopulmonary bypass was below -8 mEq from those whose base deficit was equal to or above -8 mEq. Secondly, a logistic regression model was fitted on the variables shown to have a statistically significant difference in the univariate analysis, with determination of the odds ratio. Three variables had a statistically significant difference in the univariate analysis, and two of them were highlighted by the logistic regression model. The value of base deficit measured in the immediate postoperative period of open-heart surgery is correlated with the volume of fresh frozen plasma and blood transfused after open-heart surgery and with the use of an intra-aortic balloon pump after surgery
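
    As an illustration of the analysis pipeline described above, univariate screening at the -8 mEq cut-off followed by a logistic regression reported through odds ratios, here is a minimal Python sketch. The dataframe, its column names and all values are hypothetical stand-ins, not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy import stats

    # Hypothetical data: one row per patient; every column name and value here is
    # invented for illustration, this is not the study's dataset.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "base_deficit": rng.normal(-5, 3, 136),   # mEq, after weaning from bypass
        "ffp_units":    rng.poisson(2, 136),      # fresh frozen plasma transfused
        "iabp_used":    rng.binomial(1, 0.1, 136),
        "bad_outcome":  rng.binomial(1, 0.2, 136),
    })

    # Step 1: univariate screening, splitting patients at the -8 mEq threshold.
    severe = df["base_deficit"] < -8
    for col in ["ffp_units", "iabp_used", "bad_outcome"]:
        _, p = stats.mannwhitneyu(df.loc[severe, col], df.loc[~severe, col])
        print(f"{col}: univariate p-value = {p:.3f}")

    # Step 2: logistic regression on the screened variables; exponentiated
    # coefficients are the odds ratios such analyses report.
    X = sm.add_constant(df[["base_deficit", "ffp_units", "iabp_used"]])
    fit = sm.Logit(df["bad_outcome"], X).fit(disp=False)
    print(np.exp(fit.params))   # odds ratios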

    Reed-Muller codes for random erasures and errors

    Full text link
    This paper studies the parameters for which Reed-Muller (RM) codes over GF(2) can correct random erasures and random errors with high probability, and in particular when they can achieve capacity for these two classical channels. Necessarily, the paper also studies properties of evaluations of multivariate GF(2) polynomials on random sets of inputs. For erasures, we prove that RM codes achieve capacity both for very high rate and very low rate regimes. For errors, we prove that RM codes achieve capacity for very low rate regimes, and for very high rates, we show that they can uniquely decode at about square root of the number of errors at capacity. The proofs of these four results are based on different techniques, which we find interesting in their own right. In particular, we study the following questions about E(m,r), the matrix whose rows are the truth tables of all monomials of degree at most r in m variables. What is the most (resp. least) number of random columns in E(m,r) that define a submatrix having full column rank (resp. full row rank) with high probability? We obtain tight bounds for very small (resp. very large) degrees r, which we use to show that RM codes achieve capacity for erasures in these regimes. Our decoding from random errors follows from the following novel reduction. For every linear code C of sufficiently high rate we construct a new code C', also of very high rate, such that for every subset S of coordinates, if C can recover from erasures in S, then C' can recover from errors in S. Specializing this to RM codes and using our results for erasures implies our result on unique decoding of RM codes at high rate. Finally, two of our capacity-achieving results require tight bounds on the weight distribution of RM codes. We obtain such bounds by extending the recent bounds of [KLP] from constant-degree to linear-degree polynomials
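
    To make the matrix E(m,r) concrete, here is a small self-contained Python sketch (an illustration added here, not code from the paper): it builds the matrix whose rows are the truth tables of all monomials of degree at most r in m variables, then checks whether a random set of columns, a few more than the number of rows, gives a submatrix of full row rank over GF(2). The parameters m = 6, r = 2 and the margin of 5 extra columns are arbitrary choices for the demonstration.

    import itertools
    import numpy as np

    def monomial_truth_tables(m, r):
        """Rows are the truth tables, over all 2^m inputs, of the monomials of degree <= r."""
        points = list(itertools.product([0, 1], repeat=m))            # all of GF(2)^m
        monomials = [s for d in range(r + 1)
                     for s in itertools.combinations(range(m), d)]    # variable subsets, |s| <= r
        rows = [[int(all(x[i] for i in s)) for x in points] for s in monomials]
        return np.array(rows, dtype=np.uint8)

    def gf2_rank(a):
        """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
        a = a.copy()
        rank = 0
        for col in range(a.shape[1]):
            pivot = next((i for i in range(rank, a.shape[0]) if a[i, col]), None)
            if pivot is None:
                continue
            a[[rank, pivot]] = a[[pivot, rank]]
            for i in range(a.shape[0]):
                if i != rank and a[i, col]:
                    a[i] ^= a[rank]
            rank += 1
        return rank

    # Example: E(6, 2) has one row per monomial of degree <= 2 in 6 variables and
    # one column per point of GF(2)^6.  Pick a few more random columns than rows
    # and check whether the resulting submatrix has full row rank over GF(2).
    E = monomial_truth_tables(6, 2)
    rng = np.random.default_rng(0)
    cols = rng.choice(E.shape[1], size=E.shape[0] + 5, replace=False)
    print("full row rank:", gf2_rank(E[:, cols]) == E.shape[0])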

    Electric Current Focusing Efficiency in Graphene Electric Lens

    Full text link
    In the present work, we theoretically study the electron-wave focusing phenomenon in a single-layer graphene pn junction (PNJ) and obtain the electric current density distribution of the graphene PNJ, which is in good agreement with the qualitative result of previous numerical calculations [Science 315, 1252 (2007)]. In addition, we find that for a symmetric PNJ, 1/4 of the total electric current radiated from the source electrode can be collected by the drain electrode. Furthermore, this ratio reduces to 3/16 in a symmetric graphene npn junction. Our results, obtained by the present analytical method, provide a general design rule for electric lenses based on negative-refractive-index systems. Comment: 13 pages, 7 figures

    The diagonal Ising susceptibility

    Full text link
    We use the recently derived form factor expansions of the diagonal two-point correlation function of the square Ising model to study the susceptibility for a magnetic field applied only to one diagonal of the lattice, for the isotropic Ising model. We exactly evaluate the one- and two-particle contributions $\chi_d^{(1)}$ and $\chi_d^{(2)}$ of the corresponding susceptibility, and obtain linear differential equations for the three- and four-particle contributions, as well as the five-particle contribution $\chi_d^{(5)}(t)$, but only modulo a given prime. We use these exact linear differential equations to show that not only the Russian-doll structure, but also the direct sum structure on the linear differential operators for the n-particle contributions $\chi_d^{(n)}$, is quite directly inherited from the direct sum structure on the form factors $f^{(n)}$. We show that the $n^{th}$ particle contributions $\chi_d^{(n)}$ have their singularities at roots of unity. These singularities become dense on the unit circle $|\sinh 2E_v/kT \, \sinh 2E_h/kT| = 1$ as $n \to \infty$. Comment: 18 pages
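
    Schematically, and only as a restatement of the quantities named in the abstract (the normalizations and the precise structure of the expansion are in the paper), the diagonal susceptibility is organized as a sum of n-particle contributions built from the form factors, with singularities accumulating on the quoted unit circle:

    \chi_d(t) \;=\; \sum_{n} \chi_d^{(n)}(t),
    \qquad
    \chi_d^{(n)} \ \text{built from the form factors } f^{(n)},
    \qquad
    \Big|\sinh\tfrac{2E_v}{kT}\,\sinh\tfrac{2E_h}{kT}\Big| \;=\; 1 .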

    Kinetic Study of Esterification Reaction

    Get PDF
    The esterification kinetics of acetic acid with ethanol in the presence of sulfuric acid as a homogeneous catalyst was studied with isothermal batch experiments at 50-60°C and at different molar ratios of ethanol to acetic acid [EtOH/Ac]. Investigation of the reaction kinetics indicated that a low [EtOH/Ac] molar ratio favors the esterification reaction, because the reaction is acid-catalyzed. The maximum conversion, approximately 80%, was obtained at 60°C for an EtOH/Ac molar ratio of 10. It was found that increasing the reaction temperature increases the rate constant and the conversion at a given mole ratio, which is attributed to the exothermic nature of the esterification. Activity coefficients were calculated using the UNIFAC program. Results showed a deviation in activation energy in the non-ideal system of about 20%; this is due to the polarities of water and ethanol compared with the non-polar ethyl acetate, a dissimilarity leading to strongly non-ideal behavior. The homogeneous reaction has been described with a simple power-law model. The chemical equilibrium calculated from the kinetic model is in agreement with the measured chemical equilibrium
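
    To illustrate the simple power-law description mentioned above, here is a minimal Python sketch of an isothermal batch reactor with a reversible second-order rate law. The rate constant, equilibrium constant, reaction time and initial charge are hypothetical placeholders, not the values fitted in the study.

    from scipy.integrate import solve_ivp

    # Reversible esterification AcOH + EtOH <-> EtOAc + H2O with a power-law rate
    # r = kf*[Ac][EtOH] - kr*[EtOAc][H2O].  Every number below is an illustrative
    # placeholder, not a fitted parameter from the study.
    kf = 1.0e-3          # L mol^-1 min^-1, forward rate constant at the chosen temperature
    Keq = 4.0            # assumed equilibrium constant
    kr = kf / Keq

    def rhs(t, c):
        ac, etoh, ester, water = c
        r = kf * ac * etoh - kr * ester * water
        return [-r, -r, r, r]

    # Initial charge for an EtOH/Ac molar ratio of 10 (basis: 1 mol/L acetic acid).
    c0 = [1.0, 10.0, 0.0, 0.0]
    sol = solve_ivp(rhs, (0.0, 600.0), c0)

    conversion = 1.0 - sol.y[0, -1] / c0[0]
    print(f"acetic acid conversion after 600 min: {conversion:.2f}")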

    Statistical physics-based reconstruction in compressed sensing

    Full text link
    Compressed sensing is triggering a major evolution in signal acquisition. It consists of sampling a sparse signal at a low rate and later using computational power for its exact reconstruction, so that only the necessary information is measured. Currently used reconstruction techniques are, however, limited to acquisition rates larger than the true density of the signal. We design a new procedure which is able to reconstruct the signal exactly with a number of measurements that approaches the theoretical limit in the limit of large systems. It is based on the joint use of three essential ingredients: a probabilistic approach to signal reconstruction, a message-passing algorithm adapted from belief propagation, and a careful design of the measurement matrix inspired by the theory of crystal nucleation. The performance of this new algorithm is analyzed by statistical physics methods. The obtained improvement is confirmed by numerical studies of several cases. Comment: 20 pages, 8 figures, 3 tables. Related codes and data are available at http://aspics.krzakala.or
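
    The reconstruction described above combines a probabilistic signal model, a belief-propagation-style message-passing algorithm, and a specially designed measurement matrix; the actual implementation is at the linked address. Purely as a generic illustration of message-passing-style sparse reconstruction, and not the authors' algorithm, here is a minimal approximate-message-passing (AMP) sketch in Python with a soft-thresholding denoiser and an ordinary i.i.d. Gaussian measurement matrix; the problem sizes and the threshold rule are arbitrary choices for the example.

    import numpy as np

    def soft_threshold(v, tau):
        """Elementwise soft-thresholding denoiser."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def amp_reconstruct(A, y, n_iter=30):
        """Generic AMP iteration for sparse recovery (not the paper's algorithm)."""
        m, n = A.shape
        x = np.zeros(n)
        z = y.copy()
        for _ in range(n_iter):
            tau = np.sqrt(np.mean(z ** 2))                   # crude noise-level estimate
            x_new = soft_threshold(x + A.T @ z, tau)         # denoise the pseudo-data
            z = y - A @ x_new + (z / m) * np.count_nonzero(x_new)  # Onsager-corrected residual
            x = x_new
        return x

    # Tiny synthetic example: k-sparse signal, i.i.d. Gaussian measurement matrix.
    rng = np.random.default_rng(0)
    n, m, k = 400, 200, 20
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    y = A @ x0
    x_hat = amp_reconstruct(A, y)
    print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))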